Application of Vector-Valued Rational Approximations to the Matrix Eigenvalue Problem and Connections with Krylov Subspace Methods
Author: Avram Sidi
Abstract
Let F(z) be a vector-valued function, F : ℂ → ℂ^N, that is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In a recent work [J. Approx. Theory, 76 (1994), pp. 89-111] by the author, vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series were developed, and some of their convergence properties were analyzed in detail. In particular, a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence in the complex plane were given. With the help of these theorems it was shown how optimal approximations to the poles of F(z) and the principal parts of the corresponding Laurent series expansions can be obtained. In this work we use these rational approximation procedures in conjunction with power iterations to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding eigenvectors and other vectors in the invariant subspaces. We provide interesting constructions for both nondefective and defective eigenvalues and the corresponding invariant subspaces, and present a detailed convergence theory for them. This is made possible by the observation that vectors obtained by power iterations with a matrix are actually coefficients of the Maclaurin series of a vector-valued rational function, whose poles are the reciprocals of some or all of the nonzero eigenvalues of the matrix being considered, while the coefficients in the principal parts of the Laurent expansions of this rational function are vectors in the corresponding invariant subspaces. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory of the present work provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that has been observed to possess computational advantages over their common mode of usage in some cases. We illustrate some of the theory and the conclusions derived from it with numerical examples.
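To make the key observation concrete, the following minimal Python sketch (not the paper's own rational approximation procedures; the test matrix A, starting vector u0, and the parameters N, k, m0 are illustrative assumptions) generates power iterates u_m = A^m u_0, views them as the Maclaurin coefficients of F(z) = (I - zA)^{-1} u_0, and uses one simple least-squares choice of denominator to recover approximations to the k largest eigenvalues:

```python
import numpy as np

# Hedged sketch: the power iterates u_m = A^m u_0 are the Maclaurin coefficients of
# F(z) = (I - zA)^{-1} u_0 = sum_m (A^m u_0) z^m, whose poles are 1/lambda_i for the
# nonzero eigenvalues lambda_i of A.  A least-squares choice of the degree-k denominator
# gives a polynomial whose roots approximate the k largest (in modulus) eigenvalues.
# The matrix, starting vector, and parameters below are illustrative only.

rng = np.random.default_rng(0)
N, k, m0 = 100, 3, 10                       # problem size, eigenvalues sought, preliminary iterations
X = rng.standard_normal((N, N))
lam = np.concatenate(([5.0, 4.0, 3.0], rng.uniform(-1.0, 1.0, N - 3)))
A = X @ np.diag(lam) @ np.linalg.inv(X)     # known spectrum: dominant eigenvalues 5, 4, 3
u0 = rng.standard_normal(N)

# power iterations: columns are u_0, A u_0, ..., A^{m0+k} u_0
U = np.empty((N, m0 + k + 1))
U[:, 0] = u0
for m in range(1, m0 + k + 1):
    U[:, m] = A @ U[:, m - 1]

# least-squares denominator coefficients: with c_k = 1, minimize
# || c_0 u_{m0} + c_1 u_{m0+1} + ... + c_{k-1} u_{m0+k-1} + u_{m0+k} ||_2
c, *_ = np.linalg.lstsq(U[:, m0:m0 + k], -U[:, m0 + k], rcond=None)
poly = np.concatenate(([1.0], c[::-1]))     # lambda^k + c_{k-1} lambda^{k-1} + ... + c_0

# roots of this polynomial approximate the k largest eigenvalues of A
approx = np.roots(poly)
exact = np.linalg.eigvals(A)
exact = exact[np.argsort(-np.abs(exact))][:k]
print("approximate:", np.sort_complex(approx))
print("exact      :", np.sort_complex(exact))
```

The columns of U span a Krylov subspace of the kind the Arnoldi and Lanczos methods work with, which is the setting of the equivalence results mentioned in the abstract; a more careful implementation would also scale the iterates for numerical stability and extract approximate eigenvectors and invariant-subspace vectors from the principal parts of the Laurent expansions, as described in the paper.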
Similar Articles
Iterative Projection Methods for Large–scale Nonlinear Eigenvalue Problems
In this presentation we review iterative projection methods for sparse nonlinear eigenvalue problems, which have proven to be very efficient. Here the eigenvalue problem is projected onto a subspace V of small dimension, which yields approximate eigenpairs. If an error tolerance is not met, then the search space V is expanded in an iterative way with the aim that some of the eigenvalues of the reduc...
Review of two vector extrapolation methods of polynomial type with applications to large-scale problems
An important problem that arises in different areas of science and engineering is that of computing limits of sequences of vectors {x_n}, where x_n ∈ ℂ^N with N very large. Such sequences arise, for example, in the solution of systems of linear or nonlinear equations by fixed-point iterative methods, and lim_{n→∞} x_n are simply the required solutions. In most cases of interest, these sequences converge t...
2-Norm Error Bounds and Estimates for Lanczos Approximations to Linear Systems and Rational Matrix Functions
The Lanczos process constructs a sequence of orthonormal vectors v_m spanning a nested sequence of Krylov subspaces generated by a Hermitian matrix A and some starting vector b. In this paper we show how to cheaply recover a secondary Lanczos process, starting at an arbitrary Lanczos vector v_m, and how to use this secondary process to efficiently obtain computable error estimates and error bounds...
Compact Rational Krylov Methods for Nonlinear Eigenvalue Problems
We propose a new uniform framework of Compact Rational Krylov (CORK) methods for solving large-scale nonlinear eigenvalue problems: A(λ)x = 0. For many years, linearizations have been used for solving polynomial and rational eigenvalue problems. On the other hand, for the general nonlinear case, A(λ) can first be approximated by a (rational) matrix polynomial and then a convenient linearization is us...
Projection Methods for Nonlinear Sparse Eigenvalue Problems
This paper surveys numerical methods for general sparse nonlinear eigenvalue problems with special emphasis on iterative projection methods like Jacobi–Davidson, Arnoldi or rational Krylov methods and the automated multi–level substructuring. We do not review the rich literature on polynomial eigenproblems which take advantage of a linearization of the problem.
Journal: SIAM Journal on Matrix Analysis and Applications
Volume: 16, Issue: -
Pages: -
Published: 1995